Results 1 - 20 of 731
1.
Brain Lang; 252: 105413, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38608511

ABSTRACT

Sign languages (SLs) are expressed through different bodily actions, ranging from re-enactment of physical events (constructed action, CA) to sequences of lexical signs with internal structure (plain telling, PT). Despite the prevalence of CA in signed interactions and its significance for SL comprehension, its neural dynamics remain unexplored. We examined the processing of different types of CA (subtle, reduced, and overt) and of PT in 35 adult deaf or hearing native signers. Electroencephalographic responses to signed sentences with incongruent targets were recorded. Attenuated N300 and early N400 responses were observed for CA in deaf but not in hearing signers. No differences were found among the CA types in either group, suggesting a continuum from PT to overt CA. Deaf signers focused more on body movements; hearing signers, on faces. We conclude that CA demands more processing effort than PT, arguably because of its strong focus on bodily actions.

2.
Front Psychol; 15: 1379593, 2024.
Article in English | MEDLINE | ID: mdl-38629031

ABSTRACT

Although research into multimodal stance-taking has gained momentum in recent years, the multimodal construction of so-called stacked stances has not yet received systematic attention in the literature. Mocking enactments are a prime example of such complex social actions: they are layered both interactionally and in terms of stance, and they rely significantly on bodily visual resources, depicting rather than describing events and stances. Using Du Bois' Stance Triangle as a framework, this study investigates mocking enactments as a case study to unravel the multimodal aspects of layered stance expressions. Drawing on three data sets-music instruction in Dutch, German, and English; spontaneous face-to-face interactions among friends in Dutch; and narrations of past events in Flemish Sign Language (VGT)-this study provides a qualitative exploration of mocking enactments across different communicative settings, languages, and modalities. The study achieves three main objectives: (1) illuminating how enactments are used for mocking, (2) identifying the layers of stance-taking at play, and (3) examining the multimodal construction of mocking enactments. First, our analysis reveals various uses of enactments for mocking: aside from enacting the target of the mockery, participants can include other characters and viewpoints, highlighting the breadth of the phenomenon under scrutiny. Second, we uncover the layered construction of stance on all axes of the Stance Triangle (evaluation, positioning, and alignment). Third, we find that mocking enactments are embedded in highly evaluative contexts, indexed by the use of bodily visual resources. Interestingly, not all mocking enactments include a multimodally exaggerated depiction; some merely allude to an absurd hypothetical scenario. Our findings contribute to the growing body of literature on multimodal stance-taking by showing how a nuanced interpretation of the Stance Triangle can offer a useful framework for analyzing layered stance acts.

3.
Int J Pediatr Otorhinolaryngol; 179: 111930, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38579404

ABSTRACT

BACKGROUND: Deaf and hard of hearing (DHH) children may experience communication delays, irrespective of early intervention and technology. Australian Sign Language (Auslan) is one approach in early intervention to address language delays. The current prevalence of Auslan use among Australian families with DHH children is unknown. AIMS: The first aim was to determine the proportion of families enrolled in an Australian statewide hearing loss databank who use Auslan with their DHH child. The second aim was to explore the relationships between indicators of child hearing loss (bilateral or unilateral hearing loss, degree of hearing loss, and device use: hearing aids and cochlear implants), family factors (maternal education, attendance at early intervention, family history of deafness, and socio-economic disadvantage) and the family's reported use of Auslan. METHODS: We analysed the enrolment data from 997 families who participated in an Australian statewide hearing loss databank between 2012 and 2021. We described the proportion of families who used Auslan with their DHH child at home. The associations between indicators of child hearing loss, family factors, and parental reports of communication approach were examined using correlation analyses. RESULTS: Eighty-seven of 997 parents (8.7%) reported using Auslan with their DHH child. Of these, 26 (2.6%) used Auslan as their primary language. The use of Auslan at home was associated with the following indicators of child hearing loss: bilateral hearing loss, profound compared to mild hearing loss, and cochlear implant and hearing aid use compared to no device use. The family factors associated with the use of Auslan were: referral or attendance at early intervention compared to those who did not attend, and a family history of deafness compared to those with none. No association was found between maternal education or socio-economic disadvantage and the use of Auslan. CONCLUSION: This Australian study found a low proportion (8.7%) of families with a DHH child who reported using Auslan. Seven indicators of child hearing loss and family factors were considered, and five were significantly associated with using Auslan at home. Families of children with a greater degree of hearing loss, attendance at early intervention, and a family history of deafness tended to use Auslan.


Subjects
Deafness, Hearing Aids, Hearing Loss, Persons With Hearing Impairments, Child, Humans, Deafness/epidemiology, Deafness/surgery, Deafness/rehabilitation, Australia/epidemiology, Hearing Loss/epidemiology
4.
Sensors (Basel); 24(5), 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475008

ABSTRACT

Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To assess the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points in the BLEU-4 metric) although the latter is up to four times faster. Furthermore, the use of pre-trained word embeddings in Spanish enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated synthetic Spanish Sign Language dataset, named synLSE.


Subjects
Deep Learning, Humans, Sign Language, Hearing, Communication
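
As an illustration of the BLEU-4 evaluation mentioned above, the following is a minimal sketch (not the authors' code): it loads a MarianMT checkpoint from the Hugging Face hub and scores its output with sacreBLEU, whose default metric is BLEU-4. The checkpoint name and sentences are placeholders.

```python
# Hedged sketch: scoring a MarianMT-style model with BLEU-4.
# The checkpoint and sentences below are placeholders, not the paper's setup.
from transformers import MarianMTModel, MarianTokenizer
import sacrebleu

model_name = "Helsinki-NLP/opus-mt-es-en"  # placeholder checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sources = ["la casa es grande"]        # placeholder source sentences
refs = [["the house is big"]]          # one reference stream

batch = tokenizer(sources, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
hypotheses = tokenizer.batch_decode(outputs, skip_special_tokens=True)

bleu = sacrebleu.corpus_bleu(hypotheses, refs)  # BLEU-4 by default
print(f"BLEU-4: {bleu.score:.1f}")
```
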
5.
Lang Acquis; 31(2): 85-99, 2024.
Article in English | MEDLINE | ID: mdl-38510461

ABSTRACT

Most deaf children are born to hearing parents who do not know a sign language, and they are therefore at risk of limited language input during early childhood. Studying these children as they learn a sign language has revealed that the timing of first-language exposure critically shapes language outcomes. But the input deaf children receive in their first language is not only delayed; it is also much more variable than the input most first-language learners receive, as many learn their first language from parents who are themselves new sign language learners. Much of the research on deaf children learning a sign language has considered the role of parent input in broad strokes, categorizing hearing parents as non-native, poor signers and deaf parents as native, strong signers. In this study, we deconstruct these categories and examine how variation in sign language skills among hearing parents might affect children's vocabulary acquisition. This study included 44 deaf children between 8 and 60 months old who were learning ASL and had hearing parents who were also learning ASL. We observed an interactive effect of parent ASL proficiency and age, such that parent ASL proficiency was a significant predictor of child ASL vocabulary size among older children, but not among infants and toddlers. The proficiency of language models can affect acquisition above and beyond age of acquisition, particularly as children grow. At the same time, the most skilled parents in this sample were not as fluent as "native" deaf signers, and yet their children reliably had age-expected ASL vocabularies. Data and reproducible analyses are available at https://osf.io/9ya6h/.
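
The interaction reported above can be illustrated with a simple regression containing a proficiency-by-age interaction term. This is a hedged sketch on fabricated data, not the study's analysis (which is available at the OSF link); the variable names and scales are assumptions.

```python
# Sketch: OLS regression with a parent-proficiency x child-age interaction.
# Fabricated data; the study's actual analyses are at the OSF link above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 44
df = pd.DataFrame({
    "age_months": rng.uniform(8, 60, n),
    "parent_asl": rng.uniform(1, 7, n),   # hypothetical proficiency scale
})
# Simulate a vocabulary size whose dependence on proficiency grows with age.
df["child_vocab"] = (2.0 * df.age_months
                     + 0.5 * df.age_months * df.parent_asl
                     + rng.normal(0, 20, n))

# In the formula, '*' expands to both main effects plus their interaction.
fit = smf.ols("child_vocab ~ age_months * parent_asl", data=df).fit()
print(fit.summary().tables[1])
```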

6.
Glob Pediatr Health; 11: 2333794X241240302, 2024.
Article in English | MEDLINE | ID: mdl-38529336

ABSTRACT

Aim. This study aimed to assess the effectiveness of 3 interventions-skit video, pictorial, and sign language-in improving the oral hygiene of children with hearing impairment. Materials and Methods. Sixty children were randomly divided into 3 groups: skit video, pictorial, and sign language. The mean Gingival Index (GI) and Oral Hygiene Index (OHI) scores were recorded before and after the interventions. A 1-way ANOVA was used to test for statistically significant differences between pre- and post-intervention scores. Results. A significant difference in mean oral hygiene and gingival index scores before and after the interventions was found in Group A (P < .005). A statistically significant difference was also found between Groups A and B in the intergroup comparison of OHI and GI scores post-intervention (P < .004). Conclusion. The skit video and pictorial interventions effectively improved oral health, reducing mean oral hygiene and gingival scores.
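
For readers unfamiliar with the test used here, a one-way ANOVA compares the means of several independent groups. A minimal sketch with SciPy follows; the scores are fabricated placeholders, not the study's data.

```python
# Minimal one-way ANOVA sketch (fabricated scores, not the study's data).
from scipy import stats

# Hypothetical post-intervention oral hygiene scores per group
skit_video = [1.2, 1.0, 0.9, 1.1, 1.3]
pictorial  = [1.5, 1.4, 1.6, 1.3, 1.5]
sign_lang  = [1.8, 1.7, 1.9, 1.6, 1.8]

f_stat, p_value = stats.f_oneway(skit_video, pictorial, sign_lang)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p below the chosen alpha (e.g., .05) means at least one group mean
# differs; post-hoc pairwise tests identify which groups differ.
```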

7.
Brain Imaging Behav; 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38523177

ABSTRACT

Employing functional magnetic resonance imaging (fMRI) techniques, we conducted a comprehensive analysis of neural responses during sign language, picture, and word processing tasks in a cohort of 35 deaf participants and contrasted these responses with those of 35 hearing counterparts. Our voxel-based analysis unveiled distinct patterns of brain activation during language processing tasks. Deaf individuals exhibited robust bilateral activation in the superior temporal regions during sign language processing, signifying the profound neural adaptations associated with sign comprehension. Similarly, during picture processing, the deaf cohort displayed activation in the right angular, right calcarine, right middle temporal, and left angular gyrus regions, elucidating the neural dynamics engaged in visual processing tasks. Intriguingly, during word processing, the deaf group engaged the right insula and right fusiform gyrus, suggesting compensatory mechanisms at play during linguistic tasks. Notably, the control group failed to manifest additional or distinctive regions in any of the tasks when compared to the deaf cohort, underscoring the unique neural signatures within the deaf population. Multivariate Pattern Analysis (MVPA) of functional connectivity provided a more nuanced perspective on connectivity patterns across tasks. Deaf participants exhibited significant activation in a myriad of brain regions, including bilateral planum temporale (PT), postcentral gyrus, insula, and inferior frontal regions, among others. These findings underscore the intricate neural adaptations in response to auditory deprivation. Seed-based connectivity analysis, utilizing the PT as a seed region, revealed unique connectivity patterns across tasks. These connectivity dynamics provide valuable insights into the neural interplay associated with cross-modal plasticity.

8.
J Biomech; 165: 112011, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38382174

ABSTRACT

Prior studies suggest that native (born to at least one deaf or signing parent) and non-native signers have different musculoskeletal health outcomes from signing, but the individual and combined biomechanical factors driving these differences are not fully understood. Such group differences in signing may be explained by the five biomechanical factors of American Sign Language that have been previously identified: ballistic signing, hand and wrist deviations, work envelope, muscle tension, and "micro" rests. Prior work used motion capture and surface electromyography to collect joint kinematics and muscle activations, respectively, from ten native and thirteen non-native signers as they signed for 7.5 min. Each factor was individually compared between groups. A factor analysis was used to determine the relative contributions of each biomechanical factor between signing groups. No significant differences were found between groups for ballistic signing, hand and wrist deviations, work envelope volume, excursions from recommended work envelope, muscle tension, or "micro" rests. Factor analysis revealed that "micro" rests had the strongest contribution for both groups, while hand and wrist deviations had the weakest contribution. Muscle tension and work envelope had stronger contributions for native compared to non-native signers, while ballistic signing had a stronger contribution for non-native compared to native signers. Using a factor analysis enabled discernment of relative contributions of biomechanical variables across native and non-native signers that could not be detected through isolated analysis of individual measures. Differences in the contributions of these factors may help explain the differences in signing across native and non-native signers.


Subjects
Hands, Sign Language, Humans, United States, Upper Extremity, Wrist, Factor Analysis
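
A factor analysis like the one described can be sketched with scikit-learn's FactorAnalysis. The data below are random placeholders and the measure names merely echo the five factors listed above; this is an assumption-laden illustration, not the authors' pipeline.

```python
# Sketch: factor analysis over biomechanical measures.
# Random placeholder data; the loadings printed here are meaningless.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
measures = ["ballistic", "hand_wrist_dev", "work_envelope",
            "muscle_tension", "micro_rests"]
X = rng.normal(size=(23, len(measures)))  # 23 signers x 5 measures

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Loadings: how strongly each measure contributes to each latent factor.
for name, row in zip(measures, fa.components_.T):
    print(f"{name:>15}: {np.round(row, 2)}")
```
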
9.
Dev Sci; e13481, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38327110

ABSTRACT

Recent evidence suggests that deaf children with cochlear implants (CIs) exposed to nonnative sign language from hearing parents can attain age-appropriate vocabularies in both sign and spoken language. It remains to be explored whether deaf children with CIs who are exposed to early nonnative sign language, but only up to implantation, also benefit from this input, and whether these benefits extend to memory abilities, which are strongly linked to language development. The present study examined the impact of deaf children's early short-term exposure to nonnative sign input on their spoken language and phonological memory abilities. Deaf children who had been exposed to nonnative sign input before and after cochlear implantation were compared to deaf children who never had any exposure to sign input, as well as to children with typical hearing. The children were between 5;1 and 7;1 years of age at the time of testing and were matched on age, sex, and socioeconomic status. The results suggest that even short-term exposure to nonnative sign input has positive effects on general language and phonological memory abilities as well as on nonverbal working memory, with total length of exposure to sign input being the best predictor of deaf children's performance on these measures. The present data indicate that even early short-term access to nonnative visual language input is beneficial for the language and phonological memory abilities of deaf children with cochlear implants, and that parents should not be discouraged from learning and exposing their child to sign language. RESEARCH HIGHLIGHTS: This is the first study to examine the effects of early short-term exposure to nonnative sign input on the spoken language and memory abilities of French-speaking children with cochlear implants. Early short-term nonnative exposure to sign input can have positive consequences for the language and phonological memory abilities of deaf children with CIs. Extended exposure to sign input has additional important benefits, allowing children to perform on par with children with typical hearing.

10.
J Neurosci; 44(13), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38383498

ABSTRACT

Auditory deprivation triggers a cascade of neuroanatomical adjustments that have so far been incompletely characterized in the literature. Addressing this gap, our study uses high-resolution 3 T MRI to characterize the cortical transformations that accompany congenital auditory deficits. We conducted a rigorous cortical surface analysis of 90 congenitally deaf individuals (both male and female), systematically compared with 90 normally hearing controls. Expected alterations within prototypical auditory regions were evident, but the findings extended well beyond them, revealing modifications dispersed across a range of cortical and subcortical structures and reflecting the brain's adaptation to sensory loss. Crucially, the analysis integrated two pivotal variables: the duration of auditory deprivation and the extent of sign language immersion. Relating these metrics to structural changes revealed nuanced layers of cortical reconfiguration, yielding a more fine-grained understanding of neural plasticity. This intersectional approach makes it possible to examine how varying durations of sensory experience and alternative communication modalities modulate brain morphology. These findings broaden the current understanding of adaptive neural mechanisms and pave the way for therapeutic strategies tailored to individual auditory histories and communicative repertoires.


Subjects
Auditory Cortex, Deafness, Humans, Male, Female, Magnetic Resonance Imaging, Auditory Cortex/diagnostic imaging, Neuronal Plasticity
11.
Sensors (Basel); 24(3), 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339542

ABSTRACT

Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, likely because the JSL alphabet comprises a large number of patterns (46 types) mixing static and dynamic gestures, the dynamic gestures have been excluded from most studies. The few systems that do target the dynamic JSL alphabet achieve unsatisfactory accuracy. We propose a dynamic JSL recognition system that uses effective feature extraction and feature selection to overcome these challenges. The pipeline combines hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures with standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features were proposed; their significance is that the same feature generation method can be used regardless of the number of frames and regardless of whether the gestures are dynamic or static. We employed a Random Forest (RF)-based feature selection approach to select the most informative features. Finally, we fed the reduced feature set into a kernel-based Support Vector Machine (SVM) for classification. Evaluations conducted on our newly created dynamic Japanese Sign Language alphabet dataset and the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for the deaf and hard-of-hearing, with broader implications for sign language recognition systems globally.


Subjects
Pattern Recognition, Automated, Sign Language, Humans, Japan, Pattern Recognition, Automated/methods, Hands, Algorithms, Gestures
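
A compressed sketch of the pipeline described above — MediaPipe hand landmarks, frame-count-independent features, Random-Forest-based feature selection, then a kernel SVM. The feature construction here (per-video mean and standard deviation of landmarks) is a simplification, not the paper's four feature types.

```python
# Sketch of the described pipeline: MediaPipe hand pose estimation,
# RF-based feature selection, then kernel SVM classification.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def frame_landmarks(frame_bgr):
    """Flat (x, y, z) vector of 21 hand landmarks, or None if no hand."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

def video_features(path):
    """Features usable for any frame count: mean and std over frames."""
    cap, vecs = cv2.VideoCapture(path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        v = frame_landmarks(frame)
        if v is not None:
            vecs.append(v)
    cap.release()
    vecs = np.array(vecs)
    return np.concatenate([vecs.mean(axis=0), vecs.std(axis=0)])

# With X (features per video) and y (gesture labels) assembled:
# X = np.stack([video_features(p) for p in video_paths])
# selector = SelectFromModel(
#     RandomForestClassifier(n_estimators=200, random_state=0)).fit(X, y)
# clf = SVC(kernel="rbf").fit(selector.transform(X), y)
```
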
13.
Data Brief; 53: 110080, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38328296

ABSTRACT

Nepali Sign Language (NSL) is used by the Nepali-speaking community in Nepal and in Indian states such as Sikkim, the hilly region of North Bengal, parts of Uttarakhand, Meghalaya, and Assam. It consists of the International Manual Alphabet (A-Z), Nepali consonants, vowels, conjunct letters, and numbers represented as one-handed fingerspelling, the Nepali manual alphabet. The standard gestures for NSL have been published by the Nepal National Federation of the Deaf & Hard of Hearing (NFDH). The first step in learning Nepali Sign Language is to understand its alphabet set, and technology can help ease the learning process. One application area of computer vision is translating sign language gestures to text or audio to facilitate communication. This remains an open research area, and NSL translation is among the least explored because no dataset has been available for NSL. This paper introduces the Nepali Sign Language Dataset (NSL23), the first of its kind, which includes the vowels and consonants of the Nepali Sign Language alphabet. The dataset consists of .mov videos recorded by 14 volunteers, who demonstrated 36 consonant signs and 13 vowel signs either in one full video or character by character. The dataset was prepared under various conditions, including normal lighting, dark lighting, prepared environments, unprepared environments, and real-world environments. The volunteers who performed the NSL gestures are classified as 9 beginners using NSL for the first time and 5 experts who have used NSL for 5 to 25 years. NSL23 contains 630 videos in total, representing 1205 gestures. The dataset can be used to train machine learning models to classify the NSL alphabet and, further, to develop a sign language translator.

14.
Digit Health; 10: 20552076241228432, 2024.
Article in English | MEDLINE | ID: mdl-38333634

ABSTRACT

Background: Ineffective communication with Deaf individuals in healthcare settings has led to poor outcomes including miscommunication, waste, and errors. To help address these challenges, we developed a mobile app, Deaf in Touch Everywhere (DITE™), which aims to connect the Deaf community in Malaysia with a pool of off-site interpreters through secure video conferencing. Objectives: The aims of this study were to (a) assess the feasibility and acceptability of measuring Unified Theory of Acceptance and Use of Technology (UTAUT) constructs for DITE™ with the Deaf community and Malaysian Sign Language (BIM) interpreters and (b) seek input from Deaf people and BIM interpreters on DITE™ to improve its design. Methods: Two versions of the UTAUT questionnaire were adapted for BIM interpreters and the Deaf community. Participants were recruited from both groups and asked to test the DITE™ app features over a 2-week period. They then completed the questionnaire and participated in focus group discussions to share their feedback on the app. Results: A total of 18 participants completed the questionnaire and participated in the focus group discussions. Ratings of performance expectancy, effort expectancy, facilitating conditions, and behavioural intention were high across both groups, and suggestions were provided to improve the app. High levels of engagement suggest that measurement of UTAUT constructs with these groups (through a modified questionnaire) is feasible and acceptable. Conclusions: Engaging end users in the design process provided valuable insights and will help ensure that the DITE™ app continues to address the needs of both the Deaf community and BIM interpreters in Malaysia.

15.
J Cancer Educ; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38411867

ABSTRACT

Deaf, deafblind, and hard of hearing (DDBHH) individuals experience barriers to accessing cancer screening, including ineffective patient-physician communication when discussing screening recommendations. For other underserved communities, culturally and linguistically aligned community health navigators (CHNs) have been shown to improve cancer screening and care. A needs assessment study was conducted to identify barriers and gather recommendations for CHN training resources. A community-based participatory needs assessment was conducted from May 2022 to June 2022 using three focus groups. Of the participants, eight were cancer survivors, six were advocates/navigators, and three were clinicians. All questions were semi-structured and covered screening barriers, observations and personal experiences, the perceived usefulness of having a CHN to promote cancer screening adherence, and training resources that may be useful to American Sign Language (ASL)-proficient CHNs who are also culturally and linguistically aligned. Of the 20 focus group participants, seven self-identified as persons of color. The data highlighted systemic, attitudinal, communication, and personal-level barriers as recurrent themes. The most frequently cited barrier was access to training that supports the role and competencies of CHNs, followed by cultural considerations, access to cancer guidelines in ASL, dialect diversity in sign language, and the health system itself. Unaddressed barriers can contribute to health disparities, such as lower preventive cancer screening rates among DDBHH individuals. The next step is to translate the recommendations into actionable tasks for DDBHH CHN training programs. As a result, CHNs will be well equipped to help DDBHH individuals navigate and overcome their unique barriers to cancer screening and healthcare access.

16.
Appl Linguist Rev; 15(1): 309-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38221976

ABSTRACT

Hearing parents with deaf children face difficult decisions about what language(s) to use with their child. Sign languages such as American Sign Language (ASL) are fully accessible to deaf children, yet most hearing parents are not proficient in ASL prior to having a deaf child. Parents are often discouraged from learning ASL based in part on an assumption that it will be too difficult, yet there is little evidence supporting this claim. In this mixed-methods study, we surveyed hearing parents of deaf children (n = 100) who had learned ASL to learn more about their experiences. In their survey responses, parents identified a range of resources that supported their ASL learning as well as frequent barriers. Parents identified strongly with belief statements indicating the importance of ASL and affirmed that learning ASL is attainable for hearing parents. We discuss the implications of this study for parents who are considering ASL as a language choice and for the professionals who guide them.

17.
Article in English | MEDLINE | ID: mdl-38206823

ABSTRACT

AIMS: Tailored self-management support of hypertension, considering language and communication, is important for minorities, specifically in the deaf community. However, little is known about the experiences of hypertension self-management in deaf individuals who use sign language. This study aimed to explore the factors and processes of self-management in deaf sign language users with hypertension. METHODS AND RESULTS: Ten men and women who used sign language participated in this study. Data were collected using in-depth personal interviews conducted in the presence of a sign language interpreter between November 2022 and February 2023. All interviews were recorded and transcribed for conventional content analysis. Qualitative analyses identified four categories related to the self-management of hypertension among participants: personal factors (chronic hand pain, unique language and communication, and efforts to turn crisis into opportunities), family and socioeconomic factors (family support and financial burden of living), challenges (limited health literacy and alienation from health education), and desire for health education considering the deaf community. CONCLUSION: The results of this study suggest that family support, socioeconomic status, hand pain, and health literacy should be considered for the planning and development of health education on self-management of hypertension in deaf individuals. In addition, this health education requires cooperation with qualified sign language interpreters in healthcare settings.

18.
Data Brief; 52: 109961, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38229923

ABSTRACT

Tamil is one of the oldest living languages, spoken by around 65 million people across India, Sri Lanka, and South-East Asia. Countries such as Fiji and South Africa also have significant populations with Tamil ancestry. Tamil is a complex language with 247 characters. A labelled dataset for Tamil fingerspelling, named TLFS23, has been created for research on vision-based fingerspelling translators for the speech- and hearing-impaired. The dataset opens up avenues to develop automated translators and interpreters for effective communication between fingerspelling users and non-users, using computer vision and deep learning algorithms. One thousand images representing each unique finger flexion for every Tamil character were collected, constituting a large dataset of 248 classes with a total of 255,155 images. The images were contributed by 120 individuals from different age groups. The dataset is publicly available at: https://data.mendeley.com/datasets/39kzs5pxmk/2.
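
As a sketch of how such a fingerspelling image dataset might be consumed for 248-class training (the directory layout, transforms, and baseline model are assumptions; TLFS23's actual on-disk structure may differ):

```python
# Sketch: training a baseline CNN on a 248-class fingerspelling dataset.
# Assumes one folder per class; TLFS23's actual layout may differ.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("TLFS23/train", transform=tfm)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)            # small baseline CNN
model.fc = nn.Linear(model.fc.in_features, 248)  # 248 fingerspelling classes

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:                    # one training pass shown
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```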

19.
J Multidiscip Healthc ; 17: 171-176, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38222476

RESUMO

Purpose: While the services available to deaf people in the Middle East have yet to be fully documented, they need improvement in several countries. The aim of this article was to reduce miscommunication between dentists and deaf patients through the introduction of an optional sign language course for pre-doctoral students and faculty of dentistry at King Abdulaziz University (KAUFD). Patients and Methods: All fourth-year pre-doctoral students were invited to participate in an Arabic sign language course. A survey with 11 multiple-choice and 38 true/false questions (with an "I don't know" option) was distributed both before and two weeks after the course; it was extensively validated and pilot-tested before distribution. Results: A total of 141 students responded (84.9%), of whom 49 were male (34.8%) and 92 were female (65.2%). The pre-doctoral students had a higher overall knowledge score (mean 22.9 ± 14.8) and sign language skills score (11.1 ± 1.7) after the course than before it (9.8 ± 7.1 and 3.7 ± 3.3, respectively) (all P < .001). All individual questions had lower scores pre-course compared to post-course (P < .05). Conclusion: Deaf people might face difficulties communicating in dental health care clinics, which may be alleviated by equipping dental providers with cultural competency training such as this course.

20.
Sensors (Basel) ; 24(2)2024 Jan 11.
Artigo em Inglês | MEDLINE | ID: mdl-38257544

RESUMO

Sign language is a natural communication method used to convey messages within the deaf community. In the study of sign language recognition through wearable sensors, data sources are limited and the acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences was gathered from 3 volunteers. The recognition network consists of three main layers: a convolutional neural network, bi-directional long short-term memory, and connectionist temporal classification. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. The translation network is an encoder-decoder model based on long short-term memory with global attention; its end-to-end word error rate is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.


Subjects
Sign Language, Wearable Electronic Devices, Humans, United States, Motion Capture, Neurons, Perception
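
A minimal PyTorch sketch of the recognition architecture named above (convolutional front end, bi-directional LSTM, CTC loss). The 300-word vocabulary follows the abstract; the channel count, layer sizes, and all data are assumptions, not the paper's configuration.

```python
# Sketch: CNN + BiLSTM + CTC recognition model for inertial sign data.
# Layer sizes and the 36-channel input are assumptions.
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, in_channels=36, hidden=128, num_words=300):
        super().__init__()
        self.conv = nn.Sequential(                 # local temporal features
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True,
                            bidirectional=True)    # sequence context
        self.fc = nn.Linear(2 * hidden, num_words + 1)  # +1 = CTC blank

    def forward(self, x):                          # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)           # -> (batch, time, 128)
        h, _ = self.lstm(h)
        return self.fc(h).log_softmax(-1)          # per-frame log-probs

model = SignRecognizer()
ctc = nn.CTCLoss(blank=300)                        # blank index = num_words
x = torch.randn(2, 36, 150)                        # 2 clips, 150 frames each
log_probs = model(x).transpose(0, 1)               # CTC wants (time, batch, C)
targets = torch.randint(0, 300, (2, 8))            # fabricated word indices
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 150, dtype=torch.long),
           target_lengths=torch.full((2,), 8, dtype=torch.long))
print(loss.item())
```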